132 research outputs found

    Efficient Rule Management Algorithms in Software-Defined Networks (Algorithmes efficaces de gestion des règles dans les réseaux définis par logiciel)

    In software-defined networks (SDN), the filtering requirements of critical applications often vary with flow changes and security policies. SDN addresses this with a flexible software abstraction that allows a network policy to be modified and deployed conveniently on flow-based switches. As rule sets grow and the volume of data traversing the network each second increases, it is crucial to minimize the number of table entries and to accelerate the lookup process. At the same time, the number of attacks on the Internet keeps rising, which inflates blacklists and the number of firewall rules, while the limited storage capacity of filtering devices demands efficient management of that space. In the first part of this thesis, our primary goal is a simple representation of filtering rules that yields more compact, easier-to-manage rule tables while keeping their semantics unchanged; the rules should also be constructed by reasonably efficient algorithms. This new representation adds flexibility and efficiency when deploying security policies, since the generated rules are easier to manage. A complementary approach to rule compression is to enforce access-control policies with multiple smaller switch tables; however, most existing schemes require significant rule replication, or even rewrite packet headers to prevent a packet from matching a rule at the next switch. The second part of this thesis introduces new techniques to decompose and distribute filtering rule sets over a given network topology, together with an update strategy that handles changes in network policy and topology. We also exploit the structure of series-parallel graphs to solve the rule placement problem efficiently, in tractable time, for networks of any size.
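
    The compact representation developed in the first part is the double-mask encoding detailed in the entries below: each rule pairs an inclusion prefix with an exclusion prefix. A minimal Python sketch of the assumed matching semantics (names and the bit-level check are illustrative, not the thesis implementation):

        def in_prefix(addr: int, prefix: int, mask: int) -> bool:
            """True if addr falls under the prefix defined by (prefix, mask)."""
            return (addr & mask) == (prefix & mask)

        def matches_double_mask(addr: int, incl: tuple, excl: tuple) -> bool:
            """Assumed semantics: match the inclusion prefix but not the exclusion one."""
            return in_prefix(addr, *incl) and not in_prefix(addr, *excl)

        # Example on an 8-bit field: accept 1010xxxx except the sub-block 101011xx.
        INCLUDE = (0b10100000, 0b11110000)
        EXCLUDE = (0b10101100, 0b11111100)
        assert matches_double_mask(0b10100001, INCLUDE, EXCLUDE) is True
        assert matches_double_mask(0b10101101, INCLUDE, EXCLUDE) is False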

    New Adaptive Data Transmission Scheme Over HF Radio

    An acceptable bit error rate can be maintained by adapting design parameters such as modulation, symbol rate, constellation size, and transmit power to the channel state. An estimate of HF propagation effects can therefore be used to design an adaptive data transmission system over an HF link. The proposed system combines the well-known Automatic Link Establishment (ALE) with a variable-rate transmission system. The standard ALE is modified to suit the goal of selecting the best carrier frequency (channel) for a given transmission, based on measuring SINAD (Signal plus Noise plus Distortion to Noise plus Distortion), RSL (Received Signal Level), multipath phase distortion, and BER (Bit Error Rate) for each channel in the frequency list. Channel conditions are evaluated in two arrangements: in the first, an FFT analysis is applied to a pilot signal transmitted over the channel, while in the second the data itself is used. Passive channel assessment is used to avoid bad channels, limiting both the frequency pool used for point-to-point communication and the time required for scanning and linking. Channel information is exchanged between the transmitting and receiving stations to select the modulation scheme; mainly MPSK and MFSK are considered, with different levels giving different data rates according to the channel condition. Computer simulation results show that, when transmitting at a fixed channel symbol rate of 1200 symbols/sec, the information rate ranges from 2400 bps using 4FSK up to 3600 bps using 8PSK for SNRs from 11 dB up to 26 dB.
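
    As a rough illustration of the rate adaptation reported above, a hedged Python sketch; the 18 dB switching threshold and the restriction to 4FSK/8PSK are assumptions for the example, since the abstract only gives the overall 11-26 dB range and the 2400-3600 bps endpoints:

        import math

        SYMBOL_RATE = 1200  # fixed channel symbol rate (symbols per second)

        def bits_per_symbol(constellation_size: int) -> int:
            return int(math.log2(constellation_size))

        def select_scheme(snr_db: float) -> tuple:
            """Hypothetical single-threshold choice between the two reported schemes."""
            return ("8PSK", 8) if snr_db >= 18.0 else ("4FSK", 4)

        for snr in (11.0, 26.0):
            name, size = select_scheme(snr)
            print(snr, name, SYMBOL_RATE * bits_per_symbol(size))  # 2400 bps, then 3600 bps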

    ASSESSING THE PSYCHOMETRIC PROPERTIES OF THE MALAYSIAN VERSION OF PERCEIVED DIABETES SELF-MANAGEMENT SCALE FOR DIABETES MELLITUS

    Objective: The self-management of chronic illnesses, including diabetes mellitus (DM), contributes directly to optimal outcomes. Self-management by patients living with DM is essential to achieve optimal glycemic control and to avoid or forestall the myriad of long-term negative consequences. This study aimed to assess the psychometric properties of the Malaysian version of the Perceived Diabetes Self-Management Scale (PDSMS).
    Methods: This cross-sectional study recruited 314 adult diabetes patients (≥18 years old; DM type 1 or 2) attending the Endocrine Clinic at Kuala Lumpur Hospital, Malaysia, from July 2014 to January 2015 (a 6-month period). Permission was obtained from the corresponding author to translate the English version of the PDSMS into the Malay language (M-PDSMS). The final version of the questionnaire was self-administered by the patients living with DM after obtaining their consent to participate in this study. Psychometric properties were evaluated using classical test theory, namely Cronbach's alpha (α), intraclass correlation (ICC), and construct validity by principal component analysis, and modern test theory (MTT): realtime item reliability, person reliability, and item construct validity.
    Results: The M-PDSMS proved internally consistent, with good Cronbach α values for both the pilot and the real study (α=0.69 and 0.77, respectively). The ICC (0.75) for 1-month test-retest reliability supported the stability of the items. In MTT, the realtime item reliability values also surpassed the good reliability index of 0.70 for both the pilot (α=0.93) and the real study (α=0.97).
    Conclusion: The M-PDSMS proved to be a valid and reliable questionnaire for assessing perceived diabetes self-management among Malaysian DM patients. The findings should be replicated in other states of Malaysia to confirm that this good reliability and validity profile is retained.
    Keywords: Self-management, Diabetes, Perceived diabetes self-management scale, Reliability, Validity, Modern test theory, Classical test theory, Rasch
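
    Since the internal-consistency analysis above relies on Cronbach's alpha, a short Python sketch of that computation; the toy score matrix is illustrative and unrelated to the study's data:

        import numpy as np

        def cronbach_alpha(scores) -> float:
            """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1)
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars.sum() / total_var)

        # Toy example: 5 respondents answering a 4-item Likert-type scale.
        toy = np.array([[4, 4, 5, 4],
                        [3, 3, 3, 2],
                        [5, 4, 5, 5],
                        [2, 2, 3, 2],
                        [4, 5, 4, 4]])
        print(round(cronbach_alpha(toy), 2))  # about 0.95 for this toy matrix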

    Minimizing Range Rules for Packet Filtering Using a Double Mask Representation

    In this work, we introduce a novel representation of packet filtering rules, so-called double masks, where the first mask is used as an inclusion prefix and the second one for exclusion. An efficient algorithm is developed to compute a set of double masks for a given range.
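
    The abstract does not spell out the construction algorithm itself; for context, a Python sketch of the classical baseline that double masks improve on, namely covering an integer range with ordinary prefixes (up to 2w - 2 of them in the worst case for a w-bit field):

        def range_to_prefixes(lo: int, hi: int, width: int):
            """Classical range-to-prefix expansion: cover [lo, hi] with aligned blocks.

            Returns (value, prefix_length) pairs; a single range on a w-bit field may
            need up to 2w - 2 plain prefixes, which is the cost compact encodings such
            as the double mask aim to reduce.
            """
            prefixes = []
            while lo <= hi:
                size = lo & -lo if lo > 0 else 1 << width  # largest block aligned at lo
                while size > hi - lo + 1:                  # shrink until it fits the range
                    size //= 2
                prefixes.append((lo, width - size.bit_length() + 1))
                lo += size
            return prefixes

        # Example on an 8-bit field: the range [1, 14] costs 6 plain prefixes.
        print(range_to_prefixes(1, 14, 8))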

    Efficient Distribution of Security Policy Filtering Rules in Software Defined Networks

    Software-defined network administrators can specify and smoothly deploy abstract network-wide policies; the controller, acting as a central authority, then implements them in the flow tables of the network switches. The rule sets of these policies are stored in forwarding tables, which are usually implemented with very expensive and power-hungry ternary content-addressable memory (TCAM). Consequently, a given table can only hold a limited number of rules, yet various applications need large rule sets to filter diverse flows. In this paper, we propose several algorithms for decomposing and distributing a rule set over network switches with limited flow table sizes, while preserving the network policy semantics. Through experiments on several single- and multi-dimensional rule sets, we evaluate and analyse the performance of our rule placement techniques. Our results show that our proposals are efficient in practice.
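
    As a toy illustration of the capacity constraint these algorithms work under, a naive Python split of an ordered rule list over the switches of a single path; such a split alone does not preserve first-match semantics, which is precisely what the paper's decomposition techniques address (names and structure are illustrative):

        from typing import Dict, List

        def split_rules_over_path(rules: List[Dict], capacities: List[int]) -> List[List[Dict]]:
            """Cut an ordered rule list into consecutive per-switch chunks along a path."""
            placement, start = [], 0
            for cap in capacities:
                placement.append(rules[start:start + cap])
                start += cap
            if start < len(rules):
                raise ValueError("total switch capacity is smaller than the rule set")
            return placement

        # Toy example: 5 rules spread over a path of three switches holding 2 rules each.
        toy_rules = [{"prio": p, "match": f"10.0.{p}.0/24", "action": "deny"} for p in range(5)]
        for i, chunk in enumerate(split_rules_over_path(toy_rules, [2, 2, 2])):
            print(f"switch {i}: {[r['prio'] for r in chunk]}")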

    Double Mask: An efficient rule encoding for Software Defined Networking

    Packet filtering is widely used in multiple networking appliances and applications, in particular to block malicious traffic (protecting network infrastructures through firewalls and intrusion detection systems) and for packet classification on routers, switches, and load balancers. The mechanism relies on the packet's header fields to filter such traffic using range rules over IP addresses or ports. However, packet filter sets must handle a growing number of connected nodes, many of which are compromised and used as sources of attacks. For instance, IP filter sets available in blacklists may reach several million entries and may require large memory space for their storage in filtering appliances. In this paper, we propose a new method based on a double mask IP prefix representation, together with a linear transformation algorithm, to build a minimized set of range rules. This representation makes the network more secure, reliable, and easier to maintain and configure. We formally define the double mask representation over range rules. We show empirically that the proposed method achieves an average compression ratio of 11% on real-life blacklists and up to 74% on synthetic range rule sets. Finally, we evaluate the performance of our double mask representation through an OpenFlow-based implementation on an SDN testbed with real hardware. Our results show that our technique significantly reduces the matching time in the controller when compression ratios are above 15%, leading to a faster response time and a good balance between matching time and memory space in the switch.

    Minimizing Range Rules for Packet Filtering Using Double Mask Representation

    Packet filtering is widely used in multiple networking appliances and applications, in particular to block malicious traffic (protecting network infrastructures through firewalls and intrusion detection systems) and for packet classification on routers, switches, and load balancers. The mechanism relies on the packet's header fields to filter such traffic using range rules over IP addresses or ports. However, packet filter sets must handle a growing number of connected nodes, many of which are compromised and used as sources of attacks. For instance, IP filter sets available in blacklists may reach several million entries and may require large memory space for their storage in filtering appliances. In this paper, we propose a new method based on a double mask IP prefix representation, together with a linear transformation algorithm, to build a minimized set of range rules. We formally define the double mask representation over range rules and prove that the number of masks required for any range is at most 2w − 4, where w is the length of the field. This representation makes the network more secure, reliable, and easier to maintain and configure. We show empirically that the proposed method achieves an average compression ratio of 11% on real-life blacklists and up to 74% on synthetic range rule sets. Finally, we add support for double masks to a real SDN network.
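
    The abstracts do not describe how a double mask is installed in a switch; one plausible realization, assumed here rather than taken from the papers, is a pair of priority-ordered flow entries in which the excluded sub-prefix bypasses the action attached to the inclusion prefix:

        def double_mask_to_flow_entries(incl_cidr: str, excl_cidr: str, action: str):
            """Assumed encoding: the exclusion gets a higher-priority entry that skips
            the rule's action, the inclusion gets a lower-priority entry carrying it.
            The dictionaries are illustrative, not an actual OpenFlow message format."""
            return [
                {"priority": 200, "match": {"ipv4_src": excl_cidr}, "action": "continue"},
                {"priority": 100, "match": {"ipv4_src": incl_cidr}, "action": action},
            ]

        # Example: drop traffic from 10.0.0.0/16 except the sub-block 10.0.255.0/24.
        for entry in double_mask_to_flow_entries("10.0.0.0/16", "10.0.255.0/24", "drop"):
            print(entry)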